Neural networks are known to be over-confident when their output label distribution is used directly to produce uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose uncertainty quantification capability, so that the learned model achieves the desired accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given and we focus on the uncertainty quantification task in a second stage of training. We propose a novel Bayesian meta-model that augments pre-trained models with better uncertainty quantification abilities and is both effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and adapt easily to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate the flexibility and the superior empirical performance of our meta-model approach on these applications over multiple representative image classification benchmarks.
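As a concrete illustration of the post-hoc setup described above, the following is a minimal sketch in which the pre-trained base model stays frozen and only a small meta-model head is trained on its features; producing Dirichlet concentration parameters is just one plausible choice of uncertainty output, and the head architecture, feature dimension, and uncertainty score are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class UncertaintyMetaModel(nn.Module):
    """Small head trained on top of a frozen pre-trained classifier (a sketch,
    not the authors' exact architecture): it maps intermediate features to
    Dirichlet concentration parameters, from which predictive probabilities
    and an uncertainty score are derived."""
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, features: torch.Tensor):
        # softplus keeps the concentrations positive; +1 avoids degenerate Dirichlets
        alpha = nn.functional.softplus(self.head(features)) + 1.0
        alpha0 = alpha.sum(dim=-1, keepdim=True)
        probs = alpha / alpha0                    # predictive mean
        uncertainty = alpha.shape[-1] / alpha0    # low total evidence => high uncertainty
        return probs, uncertainty.squeeze(-1)

# Usage sketch: extract features with the frozen base model, then train only the head.
# with torch.no_grad():
#     feats = base_model.backbone(images)        # hypothetical frozen feature extractor
# meta = UncertaintyMetaModel(feat_dim=feats.shape[-1], num_classes=10)
# probs, u = meta(feats)
```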
We study the potential of data-driven deep learning methods for separating two communication signals from an observation of their mixture. In particular, we assume knowledge of the generative process of one of the signals, termed the signal of interest (SOI), and no knowledge of the generative process of the second signal, referred to as interference. This form of the single-channel source separation problem is also known as interference rejection. We show that capturing high-resolution temporal structure (non-stationarity), which enables accurate synchronization with both the SOI and the interference, leads to substantial performance gains. With this key insight, we propose a domain-informed neural network (NN) design that is able to improve upon both "off-the-shelf" NNs and classical detection and interference rejection methods, as demonstrated in our simulations. Our findings highlight the key role that communication-specific domain knowledge plays in developing data-driven approaches that hold the promise of unprecedented gains.
We study the problem of single-channel source separation (SCSS) and focus on cyclostationary signals, which are particularly suitable for a variety of application domains. Unlike classical SCSS approaches, we consider a setting in which only examples of the sources are available rather than their models, motivating a data-driven approach. For source models with underlying cyclostationary Gaussian constituents, we establish a lower bound on the attainable mean squared error (MSE) for any separation method, whether model-based or data-driven. Our analysis further reveals the operation of the optimal separation and the associated implementation challenges. As a computationally attractive alternative, we propose a deep learning approach using a U-Net architecture that is competitive with the minimum-MSE estimator. We demonstrate in simulations that, with suitable domain-informed architectural choices, our U-Net method can approach the optimal performance with a substantially reduced computational burden.
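For concreteness, here is a compact 1D U-Net of the general kind referred to above, mapping a mixture waveform (I/Q channels) to an estimate of the SOI through downsampling/upsampling convolutions with skip connections; the depth, channel counts, and kernel sizes are placeholders and not the paper's configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 1D convolutions with ReLU; kernel sizes are illustrative only
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=9, padding=4), nn.ReLU(),
        nn.Conv1d(c_out, c_out, kernel_size=9, padding=4), nn.ReLU(),
    )

class UNet1D(nn.Module):
    """Minimal encoder-decoder with skip connections for waveform-to-waveform
    separation: input is the observed mixture, output is the estimated SOI.
    Assumes the input length is divisible by 4."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool1d(2)
        self.up2 = nn.ConvTranspose1d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose1d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv1d(32, 2, kernel_size=1)   # 2 output channels: I/Q of the SOI

    def forward(self, x):                            # x: (batch, 2, time)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# est_soi = UNet1D()(mixture)   # trained with an MSE loss against the true SOI,
#                               # matching the MSE criterion analyzed above
```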
We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM. Our key result is an exact characterization of the generalization behavior using the conditional symmetrized KL information between the output hypothesis and the target training samples given the source samples. Our results can also be applied to provide novel distribution-free generalization error upper bounds for these two aforementioned Gibbs algorithms. Our approach is versatile, as it also characterizes the generalization errors and excess risks of these two Gibbs algorithms in the asymptotic regime, where they converge to $\alpha$-weighted-ERM and two-stage-ERM, respectively. Based on our theoretical results, we show that the benefit of transfer learning can be viewed as a bias-variance trade-off, with the bias induced by the source distribution and the variance induced by the lack of target samples. We believe this viewpoint can guide the choice of transfer learning algorithms in practice.
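To make the objects in this analysis concrete, the block below writes out the $\alpha$-weighted Gibbs posterior and a schematic form of the symmetrized-KL characterization described above; the placement of $\alpha$ on the target empirical risk and the exact constant in the denominator are assumptions of this sketch, not the paper's statement.

```latex
% Sketch under assumptions: the weighting convention and the denominator constant are not taken from the paper.
% \alpha-weighted Gibbs posterior over hypotheses W, given a source sample s_S and a target sample s_T:
P_{W \mid S_S, S_T}(w \mid s_S, s_T) \;\propto\; \pi(w)\,
  \exp\!\Big(-\gamma \big[(1-\alpha)\,\hat{L}(w, s_S) + \alpha\,\hat{L}(w, s_T)\big]\Big).
% The expected (target-domain) generalization error is then characterized through the conditional
% symmetrized KL information between the output hypothesis and the target sample given the source sample:
\overline{\mathrm{gen}} \;=\; \frac{I_{\mathrm{SKL}}(W; S_T \mid S_S)}{\alpha\,\gamma},
\qquad
I_{\mathrm{SKL}}(W; S_T \mid S_S) \;:=\; I(W; S_T \mid S_S) + L(W; S_T \mid S_S),
% where I denotes conditional mutual information and L the corresponding conditional lautum information.
```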
Selective regression allows abstention from prediction when the confidence in an accurate prediction is insufficient. In general, by allowing a reject option, one expects the performance of a regression model to improve at the cost of reduced coverage (i.e., by predicting on fewer samples). However, as we show, in some cases the performance of a minority subgroup can decrease as coverage is reduced, and thus selective regression can magnify disparities between different sensitive subgroups. Motivated by these disparities, we propose new fairness criteria for selective regression requiring the performance of every subgroup to improve as coverage decreases. We prove that if the feature representation satisfies the sufficiency criterion or is calibrated for mean and variance, then the proposed fairness criteria are met. Further, we introduce two approaches to mitigate the performance disparity across subgroups: (a) regularizing an upper bound on the conditional mutual information under a Gaussian assumption, and (b) regularizing a contrastive loss for conditional mean and conditional variance prediction. The effectiveness of these approaches is demonstrated on synthetic and real-world datasets.
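A small sketch of the selective-regression mechanics that the fairness criteria above are defined over: a heteroscedastic network predicts a conditional mean and variance, and abstention is obtained by thresholding the predicted variance at a target coverage. The subgroup-aware regularizers (a) and (b) are not reproduced here, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts a conditional mean and log-variance; the predicted variance
    doubles as the abstention score for selective regression (sketch only)."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h).squeeze(-1), self.logvar_head(h).squeeze(-1)

def gaussian_nll(mean, logvar, y):
    # negative log-likelihood used to fit the conditional mean and variance jointly
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

def accept_at_coverage(logvar, coverage=0.8):
    # keep the `coverage` fraction of samples with the lowest predicted variance;
    # per-subgroup performance on this accepted set is what the fairness criteria track
    k = max(1, int(coverage * logvar.numel()))
    threshold = logvar.sort().values[k - 1]
    return logvar <= threshold   # boolean mask of non-abstained samples
```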
We aim to bridge the gap between our common-sense, few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult because how people learn varies from one person to another. Under commonly accepted definitions, we prove that, under the Church-Turing thesis, all human or animal few-shot learning, as well as major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory and perform significantly better than baseline models, including deep neural networks, for image recognition, low-resource language processing, and character recognition.
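Since the variational autoencoder is named above as the practical instantiation, the following is a textbook VAE (Gaussian encoder, Bernoulli decoder, reparameterization, negative ELBO); it illustrates the model class only, and the dimensions are not the authors' configuration.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Standard variational autoencoder with a Gaussian latent and a Bernoulli
    likelihood; shown only to illustrate the model class named above."""
    def __init__(self, x_dim=784, z_dim=20, hidden=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar                          # decoder returns logits

def negative_elbo(x, logits, mu, logvar):
    recon = nn.functional.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl   # minimized during training
```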
Multiple uncertainties from power sources and loads pose significant challenges to the stable supply of various resources on islands. To address these challenges, a comprehensive scheduling framework is proposed by introducing a model-free deep reinforcement learning (DRL) approach based on a model of an island integrated energy system (IES). In response to the freshwater shortage on islands, in addition to introducing seawater desalination systems, a transmission structure of "hydrothermal simultaneous transmission" (HST) is proposed. The essence of the IES scheduling problem is the optimal combination of each unit's output, which is a typical sequential control problem and fits the Markov decision process framework of deep reinforcement learning. Deep reinforcement learning adapts to various changes and adjusts its strategy in a timely manner through the interaction between the agent and the environment, avoiding complicated modeling and prediction of the multiple uncertainties. The simulation results show that the proposed scheduling framework properly handles multiple uncertainties from power sources and loads, achieves a stable supply of the various resources, and performs better than other real-time scheduling methods, especially in terms of computational efficiency. In addition, the HST model constitutes an active exploration toward improving the utilization efficiency of island freshwater.
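The scheduling problem described above maps onto a standard agent-environment loop; in the toy stand-in below, the state variables, action bounds, cost weights, and penalty terms are placeholders for the island IES quantities (generation, desalination, HST transfer), not the paper's formulation.

```python
import numpy as np

class IslandIESEnv:
    """Toy stand-in for the island IES environment: state = (renewable power,
    electric load, freshwater demand); action = unit set-points in [0, 1].
    All dynamics and reward weights are illustrative placeholders."""
    def __init__(self, horizon=24):
        self.horizon, self.t = horizon, 0

    def reset(self):
        self.t = 0
        return self._observe()

    def _observe(self):
        rng = np.random.default_rng(self.t)
        return np.array([rng.uniform(0.0, 1.0),    # renewable availability
                         rng.uniform(0.3, 1.0),    # electric load
                         rng.uniform(0.2, 0.8)])   # freshwater demand

    def step(self, action):
        # action: [dispatchable generation, desalination rate, HST transfer]
        obs = self._observe()
        power_balance = obs[0] + action[0] - obs[1]
        water_balance = action[1] + action[2] - obs[2]
        cost = 0.5 * action[0] + 0.2 * action[1]             # fuel + desalination cost
        shortage = max(0.0, -power_balance) + max(0.0, -water_balance)
        reward = -(cost + 10.0 * shortage)                   # penalize unmet demand
        self.t += 1
        return self._observe(), reward, self.t >= self.horizon, {}

# A model-free DRL agent (e.g., an off-the-shelf actor-critic implementation)
# interacts with this environment; the learned policy is the scheduling rule
# mapping the observed state to each unit's output.
```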
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice or the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or from small task-specific datasets at a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing, and speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical given the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies in open-ended, task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of data size, model size, and data diversity, based on a large-scale data collection from real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
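A heavily simplified sketch of the kind of architecture described above: image-patch features and an instruction embedding are tokenized, passed through a Transformer, and mapped to discretized action tokens. The feature dimensions, bin count, and layer sizes are assumptions, not the released model.

```python
import torch
import torch.nn as nn

class TransformerPolicy(nn.Module):
    """Language-conditioned transformer policy sketch: image tokens plus one
    instruction token go through a Transformer encoder, and a head predicts a
    categorical distribution over bins for each action dimension."""
    def __init__(self, d_model=256, action_dims=7, n_bins=256):
        super().__init__()
        self.img_proj = nn.Linear(768, d_model)   # assumed image-patch feature size
        self.txt_proj = nn.Linear(512, d_model)   # assumed instruction-embedding size
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.action_head = nn.Linear(d_model, action_dims * n_bins)
        self.action_dims, self.n_bins = action_dims, n_bins

    def forward(self, patch_feats, instr_emb):
        # patch_feats: (B, n_patches, 768); instr_emb: (B, 512)
        tokens = torch.cat([self.img_proj(patch_feats),
                            self.txt_proj(instr_emb).unsqueeze(1)], dim=1)
        pooled = self.encoder(tokens).mean(dim=1)
        logits = self.action_head(pooled)
        return logits.view(-1, self.action_dims, self.n_bins)  # per-dimension bin logits
```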
Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications with competitive performance. In practice, to solve the nontrivial and complicated tasks of real-world applications, DL is often not used standalone but instead contributes as one component of a larger, complex AI system. Although there is a fast-growing trend toward studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level or the potential impacts at the system level. More importantly, there is also a lack of systematic investigation of how to perform risk assessment for AI systems from the unit level to the system level. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles to address these issues. We propose a general framework, together with an exploratory study, for analyzing AI systems. After large-scale experiments (700+ experimental configurations and 5000+ GPU hours) and in-depth investigations, we reached several key findings that highlight the practical need, and the opportunities, for more in-depth investigations into AI systems.
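One way to read the "data distribution and uncertainty angles" above is to score each DNN unit of a larger system with a distribution-shift proxy and a predictive-uncertainty measure, then aggregate the unit scores; the particular scores (softmax entropy and one minus the max softmax probability) and the weighted-mean aggregation below are illustrative assumptions, not the paper's framework.

```python
import torch

def unit_risk_scores(logits: torch.Tensor):
    """Per-input risk proxies for a single DNN unit: predictive entropy
    (uncertainty angle) and 1 - max softmax probability (a simple
    distribution-shift proxy). Both are illustrative choices."""
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    shift_proxy = 1.0 - probs.max(dim=-1).values
    return entropy, shift_proxy

def system_risk(per_unit_scores, weights=None):
    """Aggregate unit-level scores (one tensor of per-input scores per unit)
    into a single system-level indicator; the weighted mean is a placeholder
    aggregation rule."""
    unit_means = torch.stack([s.mean() for s in per_unit_scores])
    if weights is None:
        weights = torch.full_like(unit_means, 1.0 / len(per_unit_scores))
    return (weights * unit_means).sum()
```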